Towards Minimax Policies for Online Linear Optimization with Bandit Feedback
Authors
Abstract
We address the online linear optimization problem with bandit feedback. Our contribution is twofold. First, we provide an algorithm (based on exponential weights) with a regret of order √(dn log N) for any finite action set with N actions, under the assumption that the instantaneous loss is bounded by 1. This shaves off an extraneous √d factor compared to previous works, and gives a regret bound of order d√(n log n) for any compact set of actions. Without further assumptions on the action set, this last bound is minimax optimal up to a logarithmic factor. Interestingly, our result also shows that the minimax regret for bandit linear optimization with expert advice in dimension d is the same as for the basic d-armed bandit with expert advice. Our second contribution is to show how to use the Mirror Descent algorithm to obtain computationally efficient strategies with minimax optimal regret bounds in specific examples. More precisely, we study two canonical action sets: the hypercube and the Euclidean ball. In the former case, we obtain the first computationally efficient algorithm with a d√n regret, thus improving by a factor √(d log n) over the best known result for a computationally efficient algorithm. In the latter case, our approach gives the first algorithm with a √(dn log n) regret, again shaving off an extraneous √d compared to previous works.
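The exponential-weights strategy the abstract refers to can be illustrated with a minimal sketch: sample an action from a weight distribution mixed with uniform exploration, observe only the scalar loss of the played action, and feed an unbiased estimate of the full loss vector back into the weights. All names, the estimator choice (the standard pseudo-inverse construction), and the parameter values below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def exp2_bandit(actions, loss_fn, n_rounds, eta=0.01, gamma=0.1, seed=0):
    """Sketch of an Exp2-style exponential-weights strategy for online
    linear optimization with bandit feedback.  `actions` is an (N, d)
    array of action vectors; `loss_fn(t)` returns the hidden loss vector
    for round t (only a scalar of it is ever observed)."""
    rng = np.random.default_rng(seed)
    N, d = actions.shape
    log_w = np.zeros(N)                     # log-weights, for stability
    total_loss = 0.0
    for t in range(n_rounds):
        q = np.exp(log_w - log_w.max())
        q /= q.sum()
        p = (1 - gamma) * q + gamma / N     # mix in uniform exploration
        i = rng.choice(N, p=p)
        a = actions[i]
        loss = float(a @ loss_fn(t))        # bandit feedback: one scalar
        total_loss += loss
        # unbiased loss-vector estimate: pinv(E[a a^T]) @ a * observed loss
        P = (actions.T * p) @ actions       # E[a a^T] under p
        ell_hat = np.linalg.pinv(P) @ a * loss
        log_w -= eta * (actions @ ell_hat)  # exponential-weights update
    return total_loss
```

With the standard basis as the action set this reduces to a d-armed bandit, which is consistent with the equivalence the abstract points out.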
Similar resources
Minimax Policies for Combinatorial Prediction Games
We address the online linear optimization problem when the actions of the forecaster are represented by binary vectors. Our goal is to understand the magnitude of the minimax regret for the worst possible set of actions. We study the problem under three different assumptions for the feedback: full information, and the partial information models of the so-called “semi-bandit”, and “bandit” probl...
Regret in Online Combinatorial Optimization
We address online linear optimization problems when the possible actions of the decision maker are represented by binary vectors. The regret of the decision maker is the difference between her realized loss and the best loss she would have achieved by picking, in hindsight, the best possible action. Our goal is to understand the magnitude of the best possible (minimax) regret. We study the prob...
Online Linear Optimization through the Differential Privacy Lens
We develop a simple and powerful analysis technique for perturbation style online learning algorithms, based on privacy-preserving randomization, that exhibits a suite of novel results. In particular, this work highlights the valuable addition of differential privacy methods to the toolkit used to design and understand online linear optimization tasks. This work describes the minimax optimal algo...
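The perturbation-style algorithms this abstract analyzes follow the Follow-the-Perturbed-Leader template: add random noise to the cumulative losses and play the resulting leader. The sketch below shows the full-information version with a single Gaussian perturbation; the noise choice and all names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def ftpl(actions, loss_vectors, scale=1.0, seed=0):
    """Minimal Follow-the-Perturbed-Leader sketch for full-information
    online linear optimization over a finite action set.  A single noise
    draw perturbs the cumulative loss before each leader selection."""
    rng = np.random.default_rng(seed)
    N, d = actions.shape
    cum = np.zeros(d)                       # cumulative observed losses
    noise = scale * rng.standard_normal(d)  # one-shot perturbation
    total = 0.0
    for ell in loss_vectors:
        i = int(np.argmin(actions @ (cum + noise)))  # perturbed leader
        total += float(actions[i] @ ell)
        cum += ell                          # full-information update
    return total
```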
Online Learning with Composite Loss Functions
We study a new class of online learning problems where each of the online algorithm’s actions is assigned an adversarial value, and the loss of the algorithm at each step is a known and deterministic function of the values assigned to its recent actions. This class includes problems where the algorithm’s loss is the minimum over the recent adversarial values, the maximum over the recent values,...
Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback
Bandit convex optimization is a special case of online convex optimization with partial information. In this setting, a player attempts to minimize a sequence of adversarially generated convex loss functions, while only observing the value of each function at a single point. In some cases, the minimax regret of these problems is known to be strictly worse than the minimax regret in the correspo...
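The multi-point feedback model above rests on the classical two-point gradient estimate: query the function at x ± δu for a random unit direction u and rescale the difference. A minimal sketch, with illustrative names and parameter defaults:

```python
import numpy as np

def two_point_grad(f, x, delta=1e-3, seed=0):
    """Two-point gradient estimate used in multi-point bandit convex
    optimization: (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u
    for u drawn uniformly from the unit sphere.  Its expectation over u
    approximates the gradient of f at x."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                  # uniform direction on sphere
    return (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
```

For a linear function the estimate is unbiased for any δ, which is why two evaluations per round suffice to recover full-information-style regret rates in this model.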
Publication date: 2012